3 research outputs found

    DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks

    Full text link
    We present DeblurGAN, an end-to-end learned method for motion deblurring. The learning is based on a conditional GAN and a content loss. DeblurGAN achieves state-of-the-art performance in both the structural similarity measure and visual appearance. The quality of the deblurring model is also evaluated in a novel way on a real-world problem: object detection on (de-)blurred images. The method is five times faster than the closest competitor, DeepDeblur. We also introduce a novel method for generating synthetic motion-blurred images from sharp ones, allowing realistic dataset augmentation. The model, code and dataset are available at https://github.com/KupynOrest/DeblurGAN
    Comment: CVPR 2018 camera-ready
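    The abstract's combined objective (an adversarial term from the conditional GAN plus a content loss) can be sketched roughly as below. This is a hedged illustration, not the paper's implementation: the WGAN-style critic term, the λ = 100 weighting, and the plain NumPy arrays standing in for VGG feature maps are all assumptions about the setup.

    ```python
    import numpy as np

    def content_loss(feat_restored, feat_sharp):
        # Content (perceptual) loss: MSE between feature maps of the restored
        # and sharp images. The paper uses features from a pretrained network;
        # here plain arrays stand in for those feature maps (assumption).
        return np.mean((feat_restored - feat_sharp) ** 2)

    def adversarial_loss(critic_scores):
        # WGAN-style generator loss (assumed): the generator tries to raise
        # the critic's score on restored images, so it minimizes the negation.
        return -np.mean(critic_scores)

    def generator_loss(feat_restored, feat_sharp, critic_scores, lam=100.0):
        # Total loss = adversarial term + weighted content term.
        # lam = 100 is an assumed weighting, used here for illustration only.
        return adversarial_loss(critic_scores) + lam * content_loss(feat_restored, feat_sharp)
    ```

    For example, when the restored and sharp feature maps match exactly, the content term vanishes and only the adversarial term remains.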

    Application of Generative Neural Models for Style Transfer Learning in Fashion

    No full text
    The purpose of this thesis is to analyze different generative adversarial networks for application in fashion, with a focus on the "mode collapse" problem. We studied the theory of mode collapse and conducted experiments on a synthetic toy dataset and on a dataset of real fashion data. With the developed method, we achieved visible improvements in the quality of garment generation by mitigating mode collapse.
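    On synthetic toy datasets such as a ring of Gaussians, mode collapse is commonly quantified by counting how many of the target modes receive generated samples. The helper below is a hypothetical illustration of that diagnostic, not the thesis's own method; the mode layout and the coverage radius are assumptions.

    ```python
    import numpy as np

    def mode_coverage(samples, mode_centers, radius=1.0):
        # Count how many target modes receive at least one generated sample
        # within `radius`. A collapsed generator covers only a few modes;
        # a healthy one covers them all.
        covered = set()
        for s in samples:
            dists = np.linalg.norm(mode_centers - s, axis=1)
            nearest = int(np.argmin(dists))
            if dists[nearest] <= radius:
                covered.add(nearest)
        return len(covered)

    # Toy setup (assumed): 8 Gaussian modes arranged on a ring of radius 5,
    # a common synthetic benchmark for mode collapse.
    angles = np.linspace(0, 2 * np.pi, 8, endpoint=False)
    centers = 5.0 * np.stack([np.cos(angles), np.sin(angles)], axis=1)
    ```

    A generator that emits samples clustered near a single center scores 1 out of 8, while one that hits every mode scores 8.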